Use the requests module to make an HTTP request to http://www.github.com/ibm
Get status code for the request
In [1]:
url = 'http://www.github.com/ibm'
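A minimal sketch of the request, reusing the `url` defined above (requests follows redirects by default, so the final status should reflect the landing page):

```python
import requests

url = 'http://www.github.com/ibm'

# Make the GET request; redirects (http -> https, www -> apex) are followed
response = requests.get(url)

# status_code holds the HTTP status of the final response, e.g. 200 for OK
print(response.status_code)
```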
Get header information
In [ ]:
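One way to inspect the response headers, as a sketch (the `response` here repeats the request above so the cell is self-contained):

```python
import requests

response = requests.get('http://www.github.com/ibm')

# response.headers is a case-insensitive dictionary of the response headers
for name, value in response.headers.items():
    print(name, ':', value)
```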
Get the body information
In [ ]:
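The body of the response is available as decoded text. A sketch, again repeating the request so the cell stands alone:

```python
import requests

response = requests.get('http://www.github.com/ibm')

# .text decodes the body using the encoding the server declared;
# for this URL the body is the HTML of the page
print(response.text[:200])  # print only the first 200 characters
```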
The way these work is similar to viewing a web page. When you point your browser at a website, you do it with a URL (http://www.github.com/ibm, for instance). GitHub sends back data containing HTML, CSS, and JavaScript, and your browser uses this data to construct the page you see. The API works similarly: you request data with a URL (https://api.github.com/orgs/ibm), but instead of HTML and such, you get data formatted as JSON.
In [9]:
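A sketch of such an API call, using the `orgs` endpoint mentioned above without authentication (unauthenticated calls are rate-limited, so the request may fail with 403 if the limit is exhausted):

```python
import requests

api_url = 'https://api.github.com/orgs/ibm'

response = requests.get(api_url)

if response.status_code == 200:
    # .json() parses the JSON body into a Python dict
    data = response.json()
    print(data['login'])         # organization login name
    print(data['public_repos'])  # number of public repositories
else:
    print('Request failed with status', response.status_code)
```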
Authenticate requests to increase the API request limit. Access data that requires authentication.
Unfortunately, different websites have different ways of generating and using tokens and consumer keys, so we need to write the authorization code for each website separately. However, every website provides detailed documentation on how to generate and send the tokens and keys.
In [ ]:
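For GitHub, one common pattern is to send a personal access token in the `Authorization` header of every request. A sketch with a placeholder token (replace it with your own; the actual request is left commented out):

```python
import requests

TOKEN = 'ghp_your_token_here'  # placeholder, not a real token

session = requests.Session()
# GitHub accepts a personal access token via the Authorization header
session.headers.update({'Authorization': 'token ' + TOKEN})

# Any request made through this session now carries the token, e.g.:
# response = session.get('https://api.github.com/user')
```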
In [13]:
In [ ]:
In [ ]:
Write code to append data row-wise to a CSV file
In [ ]:
import csv

WRITE_CSV = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 2/ipython/Repo_csv.csv"

with open(WRITE_CSV, 'at', encoding='utf-8', newline='') as csv_obj:
    write = csv.writer(csv_obj)  # Note it is csv.writer, not reader
    write.writerow(['REPO ID', 'REPO NAME'])
In [ ]:
from google.colab import drive
from google.colab import files
drive.mount('/content/drive/')
uploaded = files.upload()
What do you think will happen if we use 'wt' as the mode instead of 'at'?
Write a program to save the IBM repositories into the CSV file, so that each row is a new repository, column 1 is the repository ID, and column 2 is the repository name
In [ ]:
#Enter code here
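One possible sketch of a solution, assuming the organization `repos` listing endpoint (an assumption; check the GitHub API documentation) and a local copy of the `Repo_csv.csv` file created above. The request may fail with 403 if the unauthenticated rate limit is exhausted:

```python
import csv
import requests

API_URL = 'https://api.github.com/orgs/ibm/repos'  # assumed listing endpoint
CSV_PATH = 'Repo_csv.csv'  # adjust to your own path

response = requests.get(API_URL)

if response.status_code == 200:
    # Append one row per repository: column 1 is the ID, column 2 the name
    with open(CSV_PATH, 'at', encoding='utf-8', newline='') as csv_obj:
        writer = csv.writer(csv_obj)
        for repo in response.json():
            writer.writerow([repo['id'], repo['name']])
else:
    print('Request failed with status', response.status_code)
```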